Liron Debunks The Most Common “AI Won’t Kill Us” Arguments

Update: 2025-11-05

Description

Today I’m sharing my AI doom interview on Donal O’Doherty’s podcast.

I lay out the case for having a 50% p(doom). Then Donal plays devil’s advocate and tees up every major objection the accelerationists throw at doomers.

See if the anti-doom arguments hold up, or if the AI boosters are just serving sophisticated cope.

Timestamps

0:00 — Introduction & Liron’s Background

1:29 — Liron’s Worldview: 50% Chance of AI Annihilation

4:03 — Rationalists, Effective Altruists, & AI Developers

5:49 — Major Sources of AI Risk

8:25 — The Alignment Problem

10:08 — AGI Timelines

16:37 — Will We Face an Intelligence Explosion?

29:29 — Debunking AI Doom Counterarguments

1:03:16 — Regulation, Policy, and Surviving The Future With AI

Show Notes

If you liked this episode, subscribe to the Collective Wisdom Podcast for more deeply researched AI interviews: https://www.youtube.com/@DonalODoherty

Transcript

Introduction & Liron’s Background

Donal O’Doherty 00:00:00 Today I’m speaking with Liron Shapira. Liron is an investor, he’s an entrepreneur, he’s a rationalist, and he also has a popular podcast called Doom Debates, where he debates some of the greatest minds from different fields on the potential risks of AI.

Liron considers himself a doomer, which means he worries that artificial intelligence, if it gets to superintelligence level, could threaten the integrity of the world and the human species.

Donal 00:00:24 Enjoy my conversation with Liron Shapira.

Donal 00:00:30 Liron, welcome. So let’s just begin. Will you tell us a little bit about yourself and your background, please? I will have introduced you, but I just want everyone to know a bit about you.

Liron Shapira 00:00:39 Hey, I’m Liron Shapira. I’m the host of Doom Debates, which is a YouTube show and podcast where I bring in luminaries on all sides of the AI doom argument.

Liron 00:00:49 People who think we are doomed, people who think we’re not doomed, and we hash it out. We try to figure out whether we’re doomed. I myself am a longtime AI doomer. I started reading Yudkowsky in 2007, so it’s been 18 years that I’ve been worried about doom from artificial intelligence.

As for my background, I have a bachelor’s in computer science from UC Berkeley.

Liron 00:01:10 I’ve worked as a software engineer and an entrepreneur. I’ve done a Y Combinator startup, so I love tech. I’m deep in tech. I’m deep in computer science, and I’m deep into believing the AI doom argument.

I don’t see how we’re going to survive building superintelligent AI. And so I’m happy to talk to anybody who will listen. So thank you for having me on, Donal.

Donal 00:01:27 It’s an absolute pleasure.

Liron’s Worldview: 50% Chance of AI Annihilation

Donal 00:01:29 Okay, so a lot of people where I come from won’t be familiar with doomism or what a doomer is. So will you just talk through, and I’m very interested in this for personal reasons as well, your epistemic and philosophical inspirations here. How did you reach these conclusions?

Liron 00:01:45 So I often call myself a Yudkowskian, in reference to Eliezer Yudkowsky, because I agree with 95% of what he writes, the Less Wrong corpus. I don’t expect everybody to get up to speed with it because it really takes a thousand hours to absorb it all.

I don’t think that it’s essential to spend those thousand hours.

Liron 00:02:02 I think it’s something you can get, not in a soundbite, but in a one-hour interview or whatever. So yeah, you mentioned epistemic roots, right? So I am a Bayesian, meaning I think you can put probabilities on things the way prediction markets do.

Liron 00:02:16 You know, they ask, oh, what’s the chance that this war is going to end? Or this war is going to start, right? What’s the chance that this is going to happen in this sports game? And some people will tell you, you can’t reason like that.

Whereas prediction markets are like, well, the market says there’s a 70% chance, and what do you know? It happens 70% of the time. So is that what you’re getting at when you talk about my epistemics?

Donal 00:02:35 Yeah, exactly. Yeah. And I guess I’m very curious as well about, so what Yudkowsky does is he conducts thought experiments. Because obviously some things can’t be tested, we know they might be true, but they can’t be tested in experiments.

Donal 00:02:49 So I’m just curious about the role of philosophical thought experiments or maybe trans-science approaches, in terms of testing questions that we can’t actually conduct experiments on.

Liron 00:03:00 Oh, got it. Yeah. I mean this idea of what can and can’t be tested. I mean, tests are nice, but they’re not the only way to do science and to do productive reasoning.

Liron 00:03:10 There are times when you just have to do your best without a perfect test. You know, a recent example was the James Webb Space Telescope, right? It’s the successor to the Hubble Space Telescope. It worked really well, but it had to get into this really difficult orbit.

This very interesting Lagrange point, the second Sun-Earth Lagrange point; they had to get it there and they had to unfold it.

Liron 00:03:30 It was this really compact design and insanely complicated thing, and it had to all work perfectly on the first try. So you know, you can test it on earth, but earth isn’t the same thing as space.

So my point is just that as a human, as a fallible human with a limited brain, it turns out there’s things you can do with your brain that still help you know the truth about the future, even when you can’t do a perfect clone of an experiment of the future.

Liron 00:03:52 And so to connect that to the AI discussion, I think we know enough to be extremely worried about superintelligent AI. Even though there is not in fact a superintelligent AI in front of us right now.

Donal 00:04:03 Interesting.

Rationalists, Effective Altruists, & AI Developers

Donal 00:04:03 And just before we proceed, will you talk a little bit about the EA community and the rationalist community as well? Because a lot of people won’t have heard of those terms where I come from.

Liron 00:04:13 Yes. So I did mention Eliezer Yudkowsky, who’s kind of the godfather of thinking about AI safety. He was also the father of the modern rationality community. It started around 2007 when he was online blogging at a site called Overcoming Bias, and then he was blogging on his own site called Less Wrong.

And he wrote The Less Wrong Sequences and a community formed around him that also included previous rationalists, like Carl Feynman, the son of Richard Feynman.

Liron 00:04:37 So this community kind of gathered together. It had its origins in Usenet and all that, and it’s been going now for 18 years. There’s also the Center for Applied Rationality that’s part of the community.

There’s also the effective altruism community that you’ve heard of. You know, they try to optimize charity and that’s kind of an offshoot of the rationality community.

Liron 00:04:53 And now the modern AI community, funny enough, is pretty closely tied into the rationality community from my perspective. I’ve just been interested to use my brain rationally. What is the art of rationality? Right? We throw this term around, people think of Mr. Spock from Star Trek, hyper-rational.

Oh captain, you know, logic says you must do this.

Liron 00:05:12 People think of rationality as being kind of weird and nerdy, but we take a broader view of rationality where it’s like, listen, you have this tool, you have this brain in your head. You’re trying to use the brain in your head to get results.

The James Webb Space Telescope, that is an amazing success story where a lot of people use their brains very effectively, even better than Spock in Star Trek.

Liron 00:05:30 That took moxie, right? That took navigating bureaucracy, thinking about contingencies. It wasn’t a purely logical matter, but whatever it was, it was a bunch of people using their brains, squeezing the juice out of their brains to get results.

Basically, that’s kind of broadly construed what we rationalists are trying to do.

Donal 00:05:49 Okay. Fascinating.

Major Sources of AI Risk

Donal 00:05:49 So let’s just quickly lay out the major sources of AI risk. So you could have misuse, so things like bioterror, you could have arms race dynamics. You could also have organizational failures, and then you have rogue AI.

So are you principally concerned about rogue AI? Are you also concerned about the other ones on the potential path to having rogue AI?

Liron 00:06:11 My personal biggest concern is rogue AI. The way I see it, you know, different people think different parts of the problem are bigger. The way I see it, this brain in our head, it’s very impressive. It’s a three-pound piece of meat, right? A piece of fatty cells, or you know, neuron cells.

Liron 00:06:27 It’s pretty amazing, but it’s going to get surpassed, you know, the same way that manmade airplanes have s
